input gradient
One of the remarkable properties of robust computer vision models is that their input gradients are often aligned with human perception, a phenomenon referred to in the literature as perceptually aligned gradients.
We first demonstrate theoretically that off-manifold robustness leads input gradients to lie approximately on the data manifold, explaining their perceptual alignment. We then show that Bayes optimal models satisfy off-manifold robustness, and confirm the same empirically for robust models trained via gradient norm regularization, randomized smoothing, and adversarial training with projected gradient descent. Quantifying the perceptual alignment of model gradients via their similarity with the gradients of generative models, we show that off-manifold robustness correlates well with perceptual alignment. Finally, based on the levels of on- and off-manifold robustness, we identify three different regimes of robustness that affect both perceptual alignment and model accuracy: weak robustness, Bayes-aligned robustness, and excessive robustness.
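The on-/off-manifold decomposition above can be sketched numerically: if the data manifold is locally approximated by an orthonormal tangent basis, a model's input gradient splits into a component inside that tangent space and a residual off-manifold component. A minimal sketch, assuming a synthetic tangent basis `U` and a random stand-in for the input gradient (in practice `U` would come from, e.g., a local PCA of the data and the gradient from a trained model):

```python
import numpy as np

# Hedged sketch: decompose an input gradient into on- and off-manifold
# parts. The tangent basis U and the gradient are synthetic assumptions.
rng = np.random.default_rng(0)

d, k = 10, 3                                  # ambient dim, manifold dim
U, _ = np.linalg.qr(rng.normal(size=(d, k)))  # orthonormal tangent basis

grad = rng.normal(size=d)                     # stand-in for an input gradient

on_manifold = U @ (U.T @ grad)                # projection onto tangent space
off_manifold = grad - on_manifold             # orthogonal residual

# "Off-manifold robustness" corresponds to the off-manifold component being
# small, i.e. on_fraction close to 1.
on_fraction = np.linalg.norm(on_manifold) ** 2 / np.linalg.norm(grad) ** 2
print(round(float(on_fraction), 3))
```

The two components are orthogonal by construction, so the squared norms add up and `on_fraction` lies in [0, 1]; perceptually aligned gradients correspond to a large on-manifold fraction.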
Recent approaches incorporate priors on the feature attributions of a deep neural network (DNN) into the training process to reduce the model's dependence on unwanted features. However, until now, high-quality attributions satisfying desirable axioms had to be traded off against the time required to compute them, which led either to long training times or to ineffective attribution priors.
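The idea of an attribution prior can be illustrated on the simplest possible case: for a linear model f(x) = w·x, the input-gradient attribution is just w, so penalizing attributions on unwanted features reduces to penalizing the corresponding weights during training. A minimal sketch, assuming synthetic data, hypothetical "unwanted" feature indices, and a penalty weight `lam`:

```python
import numpy as np

# Hedged sketch of an attribution prior on a linear (logistic) model.
# Data, the `unwanted` feature set, `lam`, and `lr` are all assumptions.
rng = np.random.default_rng(1)
n, d = 200, 5
X = rng.normal(size=(n, d))
y = (X[:, 0] + 0.1 * rng.normal(size=n) > 0).astype(float)  # label ~ feature 0

unwanted = np.zeros(d)
unwanted[3:] = 1.0        # features the prior should push attribution off of
lam, lr = 1.0, 0.1
w = np.zeros(d)

for _ in range(500):
    p = 1.0 / (1.0 + np.exp(-(X @ w)))   # sigmoid predictions
    grad = X.T @ (p - y) / n             # logistic-loss gradient
    grad += lam * unwanted * w           # gradient of the attribution prior
    w -= lr * grad

# Penalized features should carry far less attribution than feature 0.
print(np.abs(w[3:]).max() < np.abs(w[0]))
```

For a DNN the attribution is no longer simply w, which is exactly where the quoted trade-off arises: axiomatically well-behaved attributions (e.g. path-integrated ones) are expensive to differentiate through at every training step.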
Accuracy-Robustness Trade Off via Spiking Neural Network Gradient Sparsity Trail
Nhan, Luu Trong, Duong, Luu Trung, Nam, Pham Ngoc, Thang, Truong Cong
Spiking Neural Networks (SNNs) have attracted growing interest in both computational neuroscience and artificial intelligence, primarily due to their inherent energy efficiency and compact memory footprint. However, achieving adversarial robustness in SNNs, particularly for vision-related tasks, remains a nascent and underexplored challenge. Recent studies have proposed leveraging sparse gradients as a form of regularization to enhance robustness against adversarial perturbations. In this work, we present a surprising finding: under specific architectural configurations, SNNs exhibit natural gradient sparsity and can achieve state-of-the-art adversarial defense performance without any explicit regularization. Further analysis reveals a trade-off between robustness and generalization: while sparse gradients contribute to improved adversarial resilience, they can impair the model's ability to generalize; conversely, denser gradients support better generalization but increase vulnerability to attacks. Our findings offer new insights into the dual role of gradient sparsity in SNN training.
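The link between gradient sparsity and first-order robustness can be made concrete with a linearization argument: an L∞-bounded perturbation of size ε changes the loss of a linearized model by at most ε·‖∇ₓL‖₁, so zeroing gradient entries (as naturally sparse SNN gradients effectively do) shrinks the worst-case first-order loss change. A minimal sketch, assuming a synthetic dense gradient and a hypothetical magnitude threshold for sparsification:

```python
import numpy as np

# Hedged sketch: sparse gradients shrink the first-order adversarial bound
# eps * ||grad||_1. The gradient and the threshold 1.0 are assumptions.
rng = np.random.default_rng(2)
eps = 0.03                                # L_inf perturbation budget

dense_grad = rng.normal(size=1000)        # stand-in for a dense input gradient
mask = np.abs(dense_grad) > 1.0           # keep only large-magnitude entries
sparse_grad = dense_grad * mask

sparsity = 1.0 - mask.mean()              # fraction of zeroed entries
dense_change = eps * np.abs(dense_grad).sum()    # FGSM-style loss bound
sparse_change = eps * np.abs(sparse_grad).sum()  # bound after sparsification

print(sparsity > 0.5, sparse_change < dense_change)
```

This only captures the robustness side of the abstract's trade-off; the generalization cost of sparse gradients is an empirical finding that a linearized bound cannot show.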